fix(datahub-system-update-job): enable SPRING_KAFKA_PROPERTIES_AUTO_REGISTER_SCHEMAS
#358
Conversation
Error serializing Avro message
`SPRING_KAFKA_PROPERTIES_AUTO_REGISTER_SCHEMAS` should be true to solve the following error in datahub-system-update-job (a workaround sketch follows the Slack links below):

```log
2023-08-28 01:50:42,180 [main] ERROR c.l.d.u.s.e.steps.DataHubStartupStep:40 - DataHubStartupStep failed.
org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.io.IOException: No schema registered under subject!
    at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getLatestVersion(MockSchemaRegistryClient.java:261)
    at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getLatestSchemaMetadata(MockSchemaRegistryClient.java:310)
    at io.confluent.kafka.serializers.AbstractKafkaSchemaSerDe.lookupLatestVersion(AbstractKafkaSchemaSerDe.java:181)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:77)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:59)
    at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:62)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:902)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
    at com.linkedin.metadata.dao.producer.KafkaEventProducer.produceDataHubUpgradeHistoryEvent(KafkaEventProducer.java:171)
    at com.linkedin.datahub.upgrade.system.elasticsearch.steps.DataHubStartupStep.lambda$executable$0(DataHubStartupStep.java:37)
    at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.executeStepInternal(DefaultUpgradeManager.java:110)
    at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.executeInternal(DefaultUpgradeManager.java:68)
    at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.executeInternal(DefaultUpgradeManager.java:42)
    at com.linkedin.datahub.upgrade.impl.DefaultUpgradeManager.execute(DefaultUpgradeManager.java:33)
    at com.linkedin.datahub.upgrade.UpgradeCli.run(UpgradeCli.java:80)
    at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:768)
    at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:752)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:314)
    at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:164)
    at com.linkedin.datahub.upgrade.UpgradeCliApplication.main(UpgradeCliApplication.java:23)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
    at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65)
```

The related Slack threads:
- https://datahubspace.slack.com/archives/C029A3M079U/p1693269038664329?thread_ts=1692266022.927369&cid=C029A3M079U
- https://datahubspace.slack.com/archives/CV2UVAPPG/p1693424025572079

Resolves: #347
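For anyone hitting this before the chart change lands, a minimal sketch of the workaround is to set the variable on the system-update job yourself. The `datahubSystemUpdate.extraEnvs` path is an assumption about the chart's values layout, so verify it against your chart version's values.yaml:

```yaml
# values.yaml override (sketch; the datahubSystemUpdate.extraEnvs path is an
# assumption, check your chart version before using)
datahubSystemUpdate:
  extraEnvs:
    # Let the upgrade job register its schema instead of failing with
    # "No schema registered under subject!"
    - name: SPRING_KAFKA_PROPERTIES_AUTO_REGISTER_SCHEMAS
      value: "true"
```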
Can you post your values files when encountering this error? This change doesn't make sense: that section of the k8s manifest is only activated when running GMS with an internal schema registry. This registry is read-only and doesn't support registering schemas. It should not be enabled when configured for INTERNAL; see the conditional here.
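For context, the gating being referred to looks roughly like the following. This is a simplified sketch, not the chart's literal template, and the `.Values.global.kafka.schemaregistry.type` key is taken from the chart's documented values layout:

```yaml
# Simplified sketch of the conditional in the job template (illustrative only)
{{- if eq .Values.global.kafka.schemaregistry.type "INTERNAL" }}
- name: SPRING_KAFKA_PROPERTIES_AUTO_REGISTER_SCHEMAS
  value: "false"   # internal registry is read-only, so registration stays off
{{- end }}
```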
Thank you for your reply! This is the datahub-0.2.181 values file. It uses the INTERNAL schema registry. You should be able to reproduce the error.
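The relevant part of such a values file would look roughly like this (a sketch; the service name and port are illustrative and depend on the release):

```yaml
# Sketch of the INTERNAL schema-registry section of a values.yaml
global:
  kafka:
    schemaregistry:
      type: INTERNAL
      # GMS serves the read-only internal registry API (URL is deployment-specific)
      url: "http://datahub-datahub-gms:8080/schema-registry/api/"
```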
* feat: values for PSQL db name, elastic prefix (acryldata#313)
  * Values for PSQL database name, elastic prefix
  Co-authored-by: david-leifker <[email protected]>
* feat(search,schema-registry): updates for v0.10.3 release (acryldata#311)
* Update kafka chart to 22.1.3 for kafka 3.4.0 (acryldata#316)
* feat: add parameters to cleanupJob resources settings (acryldata#317)
* feat: Add ability to specify extraPodLabels per deployment (acryldata#310)
  * feat: Add ability to specify extraPodLabels per deployment
  * Update Chart.yaml
  Co-authored-by: jorrick <[email protected]>
  Co-authored-by: david-leifker <[email protected]>
* fix: use common labels for (Cron)Jobs (acryldata#303)
  * fix(datahub): use common labels for (Cron)Jobs
  * chore: update version
  Co-authored-by: Matthijs van der Loos <[email protected]>
  Co-authored-by: david-leifker <[email protected]>
* fix: add missing global values to subchart values (acryldata#302)
  * fix(datahub): add missing global values to subchart values
  * chore: update versions
  Co-authored-by: Matthijs van der Loos <[email protected]>
  Co-authored-by: david-leifker <[email protected]>
* feat: allow pulling ebean username from secrets alongside password (acryldata#291)
* chore(secrets): use configurable refs instead of fixed names (acryldata#323)
  * chore(secrets): use configurable refs instead of fixed names
  * Update Chart.yaml
  Co-authored-by: david-leifker <[email protected]>
* Update Default version to v0.10.4 (acryldata#330)
* fix: Fixed indentation in datahub-cleanup-job-template.yml (acryldata#328)
* feat(healthcheck): use new healthcheck endpoint for GMS (acryldata#331)
  Co-authored-by: Indy Prentice <[email protected]>
* chore(version): version bump & indent (acryldata#324)
* feat: Default User Credentials (acryldata#321)
  Co-authored-by: david-leifker <[email protected]>
* feat(cloud-sql-proxy): add support for running gcloud sql proxy as prerequisite (acryldata#332)
* feat: allow for overriding job annotations and adding init containers (acryldata#315)
  * allow for overriding hook annotations
  * allow for specifying init containers on all jobs
  Signed-off-by: David van der Spek <[email protected]>
* feat(cron): Adding more parameters to ingestion-cron (acryldata#336)
  * feat(cron): Adding more parameters to ingestion-cron
  Co-authored-by: david-leifker <[email protected]>
* Update charts to include search and browse env variable flags (acryldata#337)
  * Update charts to include search and browse env variable flags
* fix(config) Set search and browse flags default off (acryldata#339)
  * fix(config) Set search and browse flags default off
* feat(cron): support nodeselector, affinity and toleration capabilities (acryldata#342)
  * feat : add tolerations parameter in datahub-ingestion-cron chart
  Co-authored-by: david-leifker <[email protected]>
* Helm update for 0.10.5 release (acryldata#346)
  * Helm update for 0.10.5 release
  * use latest point release for ingestion
* docs(readme): document secrets randomization (acryldata#350)
  * docs(readme): Add notes about randomized keys and credentials
* fix(auth-secrets): fix system update secrets (acryldata#351)
  * fix(auth-secret): remove auth secret from common template, cannot be used by all jobs
* fix(datahub-system-update-job): enable `SPRING_KAFKA_PROPERTIES_AUTO_REGISTER_SCHEMAS` (acryldata#358)
  * fix(datahub-system-update-job): enable SPRING_KAFKA_PROPERTIES_AUTO_REGISTER_SCHEMAS
* feat(session): add session duration configuration (acryldata#361)
  * feat(session): add session duration configuration
* fix(ingestion-cron): fix indentation in ingestion cron template (acryldata#356)
  * fix: remove unused and irrelevant sidecar configuration
  * fix: correct sidecar configuration in cron job template
  * fix: update ingestion cron sidecar parameter in README
  Co-authored-by: david-leifker <[email protected]>
* docs(ingestion-cron): add documentation for ingestion cron job values (acryldata#355)
* Helm changes for 0.11.0 release (acryldata#366)
  Co-authored-by: Indy Prentice <[email protected]>
* Release v0.11.0 updates (acryldata#367)
  * feat(release): updates for release v0.11.0

Co-authored-by: KonstantinVishnivetskii <[email protected]>
Co-authored-by: david-leifker <[email protected]>
Co-authored-by: Jinlin Yang <[email protected]>
Co-authored-by: Álvaro González <[email protected]>
Co-authored-by: Jorrick Sleijster <[email protected]>
Co-authored-by: jorrick <[email protected]>
Co-authored-by: Matthijs van der Loos <[email protected]>
Co-authored-by: Matthijs van der Loos <[email protected]>
Co-authored-by: Max Pospischil <[email protected]>
Co-authored-by: Sergio Gómez Villamor <[email protected]>
Co-authored-by: Pedro Silva <[email protected]>
Co-authored-by: TusharM <[email protected]>
Co-authored-by: Indy Prentice <[email protected]>
Co-authored-by: Indy Prentice <[email protected]>
Co-authored-by: seokyun.ha <[email protected]>
Co-authored-by: Tony Ouyang <[email protected]>
Co-authored-by: David van der Spek <[email protected]>
Co-authored-by: miguelbirdie <[email protected]>
Co-authored-by: Chris Collins <[email protected]>
Co-authored-by: sachinsaju <[email protected]>
Co-authored-by: Kohei Watanabe <[email protected]>
Co-authored-by: RyanHolstien <[email protected]>
Deploying datahub-next failed with the error reported in acryldata/datahub-helm#358. If we confirm that this patch fixes the deployment, we will change the hardcoded value of this environment variable in the chart. For now, we set it as an override on the datahub-next deployment, to avoid causing issues with the production release.

Bug: T363461
Change-Id: I2707467c98a8fe1a50ba2e0d3d0bd072c98648c7
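Once rendered, such an override lands in the upgrade Job's container spec as a plain env entry. A hypothetical rendering, with the Job and image names illustrative rather than taken from the deployment in question:

```yaml
# Illustrative rendered Job manifest after applying the override
apiVersion: batch/v1
kind: Job
metadata:
  name: datahub-system-update-job   # name is illustrative
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: datahub-system-update-job
          image: acryldata/datahub-upgrade   # tag omitted; deployment-specific
          env:
            - name: SPRING_KAFKA_PROPERTIES_AUTO_REGISTER_SCHEMAS
              value: "true"
```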